
    Deterministic Rateless Codes for BSC

    A rateless code encodes a finite-length information word into an infinitely long codeword such that longer prefixes of the codeword can tolerate a larger fraction of errors. A rateless code achieves capacity for a family of channels if, for every channel in the family, reliable communication is obtained by a prefix of the code whose rate is arbitrarily close to the channel's capacity. As a result, a universal encoder can communicate over all channels in the family while simultaneously achieving optimal communication overhead. In this paper, we construct the first \emph{deterministic} rateless code for the binary symmetric channel. Our code can be encoded and decoded in $O(\beta)$ time per bit and in almost logarithmic parallel time of $O(\beta \log n)$, where $\beta$ is any (arbitrarily slow) super-constant function. Furthermore, the error probability of our code is almost exponentially small, $\exp(-\Omega(n/\beta))$. Previous rateless codes are probabilistic (i.e., based on code ensembles), require polynomial time per bit for decoding, and have inferior asymptotic error probabilities. Our main technical contribution is a constructive proof of the existence of an infinite generating matrix each of whose prefixes induces a weight distribution that approximates the expected weight distribution of a random linear code.
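    As a rough illustration of the capacity-achieving prefix property (not the paper's construction), the sketch below computes the BSC capacity $C(p) = 1 - H_2(p)$ and the shortest prefix through which a capacity-achieving rateless code could deliver $k$ information bits at a rate within a chosen gap of capacity; the function names and the gap parameter are our own.

    ```python
    import math

    def bsc_capacity(p: float) -> float:
        """Capacity of the binary symmetric channel with crossover
        probability p: C(p) = 1 - H2(p), with H2 the binary entropy."""
        if p in (0.0, 1.0):
            return 1.0
        h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return 1.0 - h2

    def prefix_length(k: int, p: float, gap: float = 0.01) -> int:
        """Shortest codeword prefix through which a capacity-achieving
        rateless code can deliver k information bits over BSC(p), when
        the prefix rate is within `gap` of capacity (illustrative)."""
        rate = bsc_capacity(p) - gap
        return math.ceil(k / rate)

    # One universal encoder: a noisier channel simply consumes a longer
    # prefix of the same infinite codeword.
    for p in (0.01, 0.05, 0.11):
        print(f"p={p}: n={prefix_length(1000, p)}")
    ```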

    On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs

    We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, R, of the verifier to be comparable with the number of bits, C_V, that the verifier sends during the interaction. We show that such randomness sparsification is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization. All of the above sparsification results preserve statistical zero-knowledge provided that this property holds against a cheating verifier. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by R-F bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK), by slowing down the simulation by a factor of 2^{R-F}. Here F is a new measure of accessible bit complexity of an HVZK proof system that ranges from 0 to R, where the maximal grade of R is achieved when zero-knowledge holds against a "semi-malicious" verifier that maliciously selects its random tape and then plays honestly. Consequently, we show that some classical HVSZK proof systems, such as the one for the complete Statistical-Distance problem (Sahai and Vadhan, JACM 2003), admit randomness sparsification with no penalty. Along the way we introduce new notions of pseudorandomness against interactive proof systems and study their relations to existing notions of pseudorandomness.

    Separating Two-Round Secure Computation From Oblivious Transfer

    We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by making only black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
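    For concreteness, the snippet below spells out the standard ideal 1-out-of-2 oblivious transfer functionality that the reduction targets; this is the textbook functionality, not code from the paper.

    ```python
    def ideal_ot(m0: bytes, m1: bytes, choice: int) -> bytes:
        """Ideal 1-out-of-2 oblivious transfer: the receiver obtains
        exactly one of the sender's two messages, and the sender learns
        nothing about the choice bit."""
        return m1 if choice else m0

    # Example: a receiver with choice bit 1 learns only m1.
    print(ideal_ot(b"secret-0", b"secret-1", 1))   # b'secret-1'
    ```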

    Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

    In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold $n$-bit inputs $x$ and $y$ respectively, wish to release a common secret $z$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower-bounds are known. In this paper, we relate the CDS complexity of a predicate $f$ to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower-bounds of $\Omega(n)$ or $\Omega(n^{1-\epsilon})$, providing an exponential improvement over previous logarithmic lower-bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower-bounds for imperfect CDS protocols is a necessary step towards proving explicit lower-bounds against the class $AM$, or even $AM \cap coAM$ - a well-known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower-bounds are known.
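    A minimal sketch of the CDS model may help: the folklore perfectly-secure protocol for the equality predicate over a prime field (the field and names are our own illustrative choices, not a construction from the paper). Each party sends a single field element, and the messages perfectly hide $z$ whenever $x \neq y$.

    ```python
    import secrets

    P = 2**61 - 1   # prime field; an illustrative choice

    def cds_equality(x: int, y: int, z: int):
        """Alice and Bob share random field elements (a, b). Each sends
        one message to Carol; z is disclosed iff x == y."""
        a, b = secrets.randbelow(P), secrets.randbelow(P)
        msg_alice = (a * x + b) % P          # Alice sees only x
        msg_bob = (a * y + b + z) % P        # Bob sees only y
        return msg_alice, msg_bob

    def carol(msg_alice: int, msg_bob: int, x: int, y: int):
        """If x == y, the masks cancel and Carol recovers z. If x != y,
        the map (a, b) -> (a*x + b, a*y + b) is a bijection on F_p^2,
        so the two messages are uniform and perfectly hide z."""
        return (msg_bob - msg_alice) % P if x == y else None

    ma, mb = cds_equality(7, 7, z=42)
    print(carol(ma, mb, 7, 7))   # 42
    ```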

    On Actively-Secure Elementary MPC Reductions

    We introduce the notion of \emph{elementary MPC} reductions that allow us to securely compute a functionality $f$ by making a single call to a constant-degree ``non-cryptographic'' functionality $g$ without requiring any additional interaction. Roughly speaking, ``non-cryptographic'' means that $g$ does not make use of cryptographic primitives, though the parties can locally call such primitives. Classical MPC results yield such elementary reductions in various cases, including the setting of passive security with full corruption threshold $t<n$ (Yao, FOCS '86; Beaver, Micali, and Rogaway, STOC '90), the setting of full active security against a corrupted minority $t<n/2$ (Damgård and Ishai, Crypto '05), and, for NC1 functionalities, even the setting of full active (information-theoretic) security with full corruption threshold of $t<n$ (Ishai and Kushilevitz, FOCS '00). This leaves open the existence of an elementary reduction that achieves full active security in the dishonest-majority setting for all efficiently computable functions. Our main result shows that such a reduction is unlikely to exist. Specifically, the existence of a computationally secure elementary reduction that makes black-box use of a PRG and achieves a very weak form of partial fairness (e.g., one that holds only when the first party is not corrupted) would allow us to realize any efficiently-computable function by a \emph{constant-round} protocol that achieves a non-trivial notion of information-theoretic passive security. The existence of the latter is a well-known three-decade-old open problem in information-theoretic cryptography (Beaver, Micali, and Rogaway, STOC '90). On the positive side, we observe that this barrier can be bypassed under any of the following relaxations: (1) non-black-box use of a pseudorandom generator; (2) weaker security guarantees such as security with identifiable abort; or (3) an additional round of communication with the functionality $g$.

    Conflict Checkable and Decodable Codes and Their Applications

    Let $C$ be an error-correcting code over a large alphabet of size $q$ and block length $n$, and assume that a (possibly corrupted) codeword $c$ is distributively stored among $n$ servers, where the $i$th entry is held by the $i$th server. Suppose that every pair of servers publicly announces whether the corresponding coordinates are ``consistent'' with some legal codeword or ``conflicted''. What type of information about $c$ can be inferred from this consistency graph? Can we check whether errors occurred, and, if so, can we find the error locations and effectively decode? We initiate the study of conflict-checkable and conflict-decodable codes and prove the following main results: (1) (Almost-MDS conflict-checkable codes:) For every distance $d\leq n$, there exists a code that supports conflict-based error-detection whose dimension $k$ almost achieves the Singleton bound, i.e., $k\geq n-d+0.99$. Interestingly, the code is non-linear, and we give some evidence that suggests that this is inherent. Combinatorially, this yields an $n$-partite graph over $[q]^n$ that contains $q^k$ cliques of size $n$ whose pairwise intersection is at most $n-d\leq k-0.99$ vertices, generalizing a construction of Alon (Random Struct. Algorithms, '02) that achieves a similar result for the special case of triangles ($n=3$). (2) (Conflict-decodable codes below half-distance:) For every distance $d\leq n$ there exists a linear code that supports conflict-based error-decoding up to half of the distance. The code's dimension $k$ ``half-meets'' the Singleton bound, i.e., $k=(n-d+2)/2$, and we prove that this bound is tight for a natural class of such codes. The construction is based on symmetric bivariate polynomials and is rooted in the literature on verifiable secret sharing (Ben-Or, Goldwasser, and Wigderson, STOC '88; Cramer, Damgård, and Maurer, EUROCRYPT '00). (3) (Robust conflict-decodable codes:) We show that the above construction also satisfies a non-trivial notion of robust decoding/detection even when the number of errors is unbounded and up to $d/2$ of the servers are Byzantine and may lie about their conflicts. The resulting conflict-decoder runs in exponential time in this case, and we present an alternative construction that achieves quasipolynomial complexity at the expense of degrading the dimension to $k=(n-d+3)/3$. Our construction is based on trilinear polynomials, and the algorithmic result follows by showing that the induced conflict graph is structured enough to allow efficient recovery of a maximal vertex cover. As an application of the last result, we present the first polynomial-time statistical two-round Verifiable Secret Sharing (resp., three-round general MPC) protocol that remains secure in the presence of an active adversary that corrupts up to $t<n/3.001$ of the parties. We can upgrade the resiliency threshold to $n/3$, which is known to be optimal in this setting, at the expense of increasing the computational complexity to quasipolynomial. Previous solutions (Applebaum, Kachlon, and Patra, TCC '20) suffered from exponential-time complexity even when the adversary corrupts only $n/4$ of the parties.
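    The pairwise consistency check behind constructions (2) and (3) is rooted in symmetric-bivariate-polynomial verifiable secret sharing. The toy sketch below (our own illustration, with made-up parameters) shows how a single corrupted share surfaces as an edge of the conflict graph: server $i$ holds the row polynomial $f_i(y) = F(\alpha_i, y)$, and servers $i, j$ conflict when the symmetry check $f_i(\alpha_j) = f_j(\alpha_i)$ fails.

    ```python
    import itertools

    P = 97                      # small prime field, illustrative
    points = [1, 2, 3, 4, 5]    # evaluation point alpha_i of server i

    # A symmetric bivariate polynomial F(x, y) = 5 + 7x + 7y + 3xy
    # (coeffs[a][b] == coeffs[b][a] guarantees F(x, y) == F(y, x)).
    coeffs = [[5, 7], [7, 3]]

    def F(x: int, y: int) -> int:
        return sum(c * pow(x, a, P) * pow(y, b, P)
                   for a, row in enumerate(coeffs)
                   for b, c in enumerate(row)) % P

    # Server i stores its row polynomial f_i as the evaluations
    # f_i(alpha_j) = F(alpha_i, alpha_j).
    shares = [[F(xi, xj) for xj in points] for xi in points]
    shares[0][2] = (shares[0][2] + 1) % P   # corrupt one entry of server 0

    # Pairwise check: i and j conflict when f_i(alpha_j) != f_j(alpha_i).
    conflicts = [(i, j)
                 for i, j in itertools.combinations(range(len(points)), 2)
                 if shares[i][j] != shares[j][i]]
    print(conflicts)   # [(0, 2)]
    ```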

    Actively Secure Arithmetic Computation and VOLE with Constant Computational Overhead

    We study the complexity of two-party secure arithmetic computation, where the goal is to evaluate an arithmetic circuit over a finite field $F$ in the presence of an active (aka malicious) adversary. In the passive setting, Applebaum et al. (Crypto 2017) constructed a protocol that only makes a \emph{constant} (amortized) number of field operations per gate. This protocol uses the underlying field $F$ as a black box, makes black-box use of (standard) oblivious transfer, and its security is based on arithmetic analogs of well-studied cryptographic assumptions. We present an actively-secure variant of this protocol that achieves, for the first time, all of the above features. The protocol relies on the same assumptions and adds only a minor overhead in computation and communication. Along the way, we construct a highly-efficient Vector Oblivious Linear Evaluation (VOLE) protocol and present several practical and theoretical optimizations, as well as a prototype implementation. Our most efficient variant can achieve an asymptotic rate of $1/4$ (i.e., for vectors of length $w$ we send roughly $4w$ elements of $F$), which is only slightly worse than the passively-secure protocol, whose rate is $1/3$. The protocol seems to be practically competitive over fast networks, even for relatively small fields $F$ and relatively short vectors. Specifically, our VOLE protocol has 3 rounds, and even for 10K-long vectors, it has an amortized cost per entry of less than 4 OTs and less than 300 arithmetic operations. Most of these operations (about 200) can be pre-processed locally in an offline non-interactive phase. (Better constants can be obtained for longer vectors.) Some of our optimizations rely on a novel intractability assumption regarding the non-malleability of noisy linear codes, which may be of independent interest. Our technical approach employs two new ingredients. First, we present a new information-theoretic construction of Conditional Disclosure of Secrets (CDS) and show how to use it in order to immunize the VOLE protocol of Applebaum et al. against active adversaries. Second, by using elementary properties of low-degree polynomials, we show that, for some simple arithmetic functionalities, one can easily upgrade Yao's garbled-circuit protocol to the active setting with a minor overhead while preserving the round complexity.
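    For reference, the ideal VOLE functionality is simple to state; the sketch below is the standard componentwise definition over a prime field of our choosing, not the paper's protocol. A rate-$1/4$ realization would transmit roughly four field elements per entry of the output vector.

    ```python
    P = 2**61 - 1   # illustrative prime field

    def ideal_vole(u: list, v: list, x: int) -> list:
        """Ideal VOLE over F_p: the sender inputs vectors u and v, the
        receiver inputs a scalar x and learns w = u*x + v componentwise.
        The sender learns nothing; the receiver learns nothing beyond w."""
        assert len(u) == len(v)
        return [(ui * x + vi) % P for ui, vi in zip(u, v)]

    print(ideal_vole([1, 2, 3], [10, 20, 30], x=5))   # [15, 30, 45]
    ```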

    From Private Simultaneous Messages to Zero-Information Arthur-Merlin Protocols and Back

    Göös, Pitassi, and Watson (ITCS, 2015) have recently introduced the notion of \emph{Zero-Information Arthur-Merlin Protocols} (ZAM). In this model, which can be viewed as a private version of the standard Arthur-Merlin communication complexity game, Alice and Bob hold a pair of inputs $x$ and $y$, respectively, and Merlin, the prover, attempts to convince them that some public function $f$ evaluates to 1 on $(x,y)$. In addition to standard completeness and soundness, Göös et al. require an additional ``zero-knowledge'' property which asserts that on each yes-input, the distribution of Merlin's proof leaks no information about the inputs $(x,y)$ to an external observer. In this paper, we relate this new notion to the well-studied model of \emph{Private Simultaneous Messages} (PSM) that was originally suggested by Feige, Kilian, and Naor (STOC, 1994). Roughly speaking, we show that the randomness complexity of ZAM essentially corresponds to the communication complexity of PSM, and that the communication complexity of ZAM essentially corresponds to the randomness complexity of PSM. This relation works in both directions, where different variants of PSM are used. Consequently, we derive better upper-bounds on the communication complexity of ZAM for arbitrary functions. As a secondary contribution, we reveal new connections between different variants of PSM protocols, which we believe to be of independent interest.
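    A minimal PSM example may help fix the model: the folklore one-time-pad protocol for $f(x,y) = x \oplus y$ on single bits, in which each party sends one simultaneous message and the referee learns only the output. This illustrates the PSM syntax only; it is not a construction from the paper.

    ```python
    import secrets

    def psm_xor(x: int, y: int):
        """Toy PSM for f(x, y) = x XOR y on single bits: Alice and Bob
        share a random bit r and each sends one simultaneous message."""
        r = secrets.randbelow(2)   # shared randomness, hidden from referee
        return x ^ r, y ^ r        # Alice's message, Bob's message

    def referee(msg_a: int, msg_b: int) -> int:
        """The referee learns exactly x XOR y: conditioned on the output,
        msg_a is a uniform bit and msg_b equals msg_a XOR output."""
        return msg_a ^ msg_b

    print(referee(*psm_xor(1, 0)))   # 1
    ```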

    On the Relationship between Statistical Zero-Knowledge and Statistical Randomized Encodings

    \emph{Statistical zero-knowledge proofs} (Goldwasser, Micali, and Rackoff, SICOMP 1989) allow a computationally-unbounded server to convince a computationally-limited client that an input $x$ is in a language $\Pi$ without revealing any additional information about $x$ that the client cannot compute by herself. \emph{Randomized encoding} (RE) of functions (Ishai and Kushilevitz, FOCS 2000) allows a computationally-limited client to publish a single (randomized) message, $\mathsf{enc}(x)$, from which the server learns whether $x$ is in $\Pi$ and nothing else. It is known that SRE, the class of problems that admit statistically private randomized encodings with a polynomial-time client and a computationally-unbounded server, is contained in the class SZK of problems that have statistical zero-knowledge proofs. However, the exact relation between these two classes, and, in particular, the possibility of equivalence, was left as an open problem. In this paper, we explore the relationship between SRE and SZK, and derive the following results: * In a non-uniform setting, statistical randomized encoding with one-sided privacy (1RE) is equivalent to non-interactive statistical zero-knowledge (NISZK). These variants were studied in the past as natural relaxations/strengthenings of the original notions. Our theorem shows that proving SRE = SZK is equivalent to showing that 1RE = RE and SZK = NISZK. The latter is a well-known open problem (Goldreich, Sahai, and Vadhan, CRYPTO 1999). * If SRE is non-trivial (not in BPP), then infinitely-often one-way functions exist. The analogous hypothesis for SZK yields only \emph{auxiliary-input} one-way functions (Ostrovsky, Structure in Complexity Theory, 1991), which is believed to be a significantly weaker implication. * If there exists an average-case hard language with a \emph{perfect randomized encoding}, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for SZK implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH. We believe that our results sharpen the relationship between SRE and SZK and illuminate the core differences between these two classes.
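    To make the RE notion concrete, here is the classic perfect randomized encoding of the product $f(x_1, x_2) = x_1 x_2$ over GF(2): the first two outputs are uniform, the third is determined by them together with the output value, and the decoder recovers exactly $f$. A toy sketch of the general concept, not taken from the paper.

    ```python
    import secrets

    def encode(x1: int, x2: int):
        """Perfect randomized encoding of f(x1, x2) = x1*x2 over GF(2).
        The encoding reveals nothing beyond the value of f: the first
        two bits are uniform, and the third equals their product XOR f."""
        r1, r2 = secrets.randbelow(2), secrets.randbelow(2)
        return x1 ^ r1, x2 ^ r2, (r2 & x1) ^ (r1 & x2) ^ (r1 & r2)

    def decode(a: int, b: int, c: int) -> int:
        """(x1+r1)(x2+r2) + c = x1*x2 over GF(2)."""
        return (a & b) ^ c

    print(decode(*encode(1, 1)), decode(*encode(1, 0)))   # 1 0
    ```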

    Related-Key Secure Pseudorandom Functions: The Case of Additive Attacks

    In a related-key attack (RKA) an adversary attempts to break a cryptographic primitive by invoking it with several secret keys that satisfy some known relation. The task of constructing provably RKA-secure PRFs (for non-trivial relations) under a standard assumption has turned out to be challenging. Currently, the only known provably-secure construction is due to Bellare and Cash (Crypto 2010). This important feasibility result is restricted, however, to linear relations over relatively complicated groups (e.g., $\mathbb{Z}^*_q$ where $q$ is a large prime) that arise from the algebraic structure of the underlying cryptographic assumption (DDH/DLIN). In contrast, applications typically require RKA-security with respect to simple additive relations such as XOR or addition modulo a power of two. In this paper, we partially fill this gap by showing that it is possible to deal with simple additive relations at the expense of relaxing the model of the attack. We introduce several natural relaxations of RKA-security, study the relations between these notions, and describe efficient constructions either under lattice assumptions or under general assumptions. Our results enrich the landscape of RKA security and suggest useful trade-offs between the attack model and the family of possible relations.
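    The additive (XOR) attack model can be phrased as a simple oracle game; the sketch below is our own illustration, with HMAC-SHA256 standing in for the PRF.

    ```python
    import hmac, hashlib, secrets

    KEYLEN = 16

    def prf(key: bytes, x: bytes) -> bytes:
        """Stand-in PRF; HMAC-SHA256 is an illustrative choice."""
        return hmac.new(key, x, hashlib.sha256).digest()

    class XorRkaOracle:
        """XOR related-key attack game: the adversary picks a key offset
        delta and an input x, and receives F(k XOR delta, x) for the
        hidden key k. RKA-security asks that all such answers look
        random, even across different offsets."""
        def __init__(self):
            self._key = secrets.token_bytes(KEYLEN)

        def query(self, delta: bytes, x: bytes) -> bytes:
            related = bytes(a ^ b for a, b in zip(self._key, delta))
            return prf(related, x)

    oracle = XorRkaOracle()
    print(oracle.query(bytes(KEYLEN), b"msg").hex())                # F(k, msg)
    print(oracle.query(b"\x01" + bytes(KEYLEN - 1), b"msg").hex())  # related key
    ```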